Facial Expression Recognition with Keras

python
deep learning
machine learning
cnn
neural network
keras
tensorflow
face recognition
Author

kakamana

Published

June 2, 2023

Facial Expression Recognition with Keras

This notebook is part of the Coursera guided project Facial Expression Recognition with Keras. We will build and train a convolutional neural network (CNN) in Keras from scratch to recognize facial expressions. The data consists of 48x48-pixel grayscale images of faces, and the objective is to classify each face by the emotion shown into one of seven categories (0=Angry, 1=Disgust, 2=Fear, 3=Happy, 4=Sad, 5=Surprise, 6=Neutral). We will use OpenCV to automatically detect faces in images and draw bounding boxes around them. Once the CNN is trained, saved, and exported, we will serve its predictions directly to a web interface and perform real-time facial expression recognition on video and image data.

This is my learning experience of data science through DeepLearning.AI. These repository contributions are part of my learning journey through my graduate program, the Master of Applied Data Science (MADS) at the University of Michigan, along with DeepLearning.AI, Coursera, and DataCamp. You can find similar articles and more stories on my Medium and LinkedIn profiles, and I am also on Kaggle and GitHub (blogs and repos). Thank you for your motivation, support, and valuable feedback.

These include projects, coursework, and notebooks that I created during my data science journey. They exist for reproducibility and future reference only. All source code, slides, and screenshots are the intellectual property of their respective content authors. If you find this content beneficial, kindly consider a learning subscription from DeepLearning.AI, Coursera, or DataCamp.


Practice Project: Facial Expression Recognition with Keras

Outline

### Task 1: Import Libraries

Code
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import utils
import os
%matplotlib inline

import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.layers import Dense, Input, Dropout,Flatten, Conv2D
from tensorflow.keras.layers import BatchNormalization, Activation, MaxPooling2D
from tensorflow.keras.models import Model, Sequential
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.callbacks import ModelCheckpoint, ReduceLROnPlateau
from tensorflow.keras.utils import plot_model
devices = tf.config.list_physical_devices()
print(devices)

from IPython.display import SVG, Image
#from livelossplot import PlotLossesTensorFlowKeras
from livelossplot import PlotLossesKerasTF

print("Tensorflow version:", tf.__version__)
Code
# Specify the GPU device to use
gpus = tf.config.list_physical_devices('GPU')
if gpus:
  # Set the GPU memory growth to True
  try:
    tf.config.experimental.set_memory_growth(gpus[0], True)
  except RuntimeError as e:
    print(e)

### Task 2: Plot Sample Images

Code
utils.datasets.fer.plot_example_images(plt).show()
Code
for expression in os.listdir("train/"):
    print(f'{len(os.listdir("train/" + expression))} {expression} images')
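
A quick optional sketch (my addition, assuming the train/ folder layout above) that turns these same per-class counts into a bar chart with the already-imported seaborn, so any class imbalance is easy to see:

Code
# Visualize the number of training images per expression class
counts = {expression: len(os.listdir("train/" + expression))
          for expression in os.listdir("train/")}
sns.barplot(x=list(counts.keys()), y=list(counts.values()))
plt.title("Training images per expression")
plt.ylabel("image count")
plt.show()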

### Task 3: Generate Training and Validation Batches

Code
img_size = 48
batch_size = 64

datagen_train = ImageDataGenerator(horizontal_flip=True)

train_generator = datagen_train.flow_from_directory("train/",
                                                    target_size=(img_size,img_size),
                                                    color_mode="grayscale",
                                                    batch_size=batch_size,
                                                    class_mode='categorical',
                                                    shuffle=True)

datagen_validation = ImageDataGenerator()  # no augmentation for validation images
validation_generator = datagen_validation.flow_from_directory("test/",
                                                    target_size=(img_size,img_size),
                                                    color_mode="grayscale",
                                                    batch_size=batch_size,
                                                    class_mode='categorical',
                                                    shuffle=False)
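
Before building the model, a small sanity check (my addition, assuming the generators above) confirms the batch shapes and the class-to-index mapping that flow_from_directory inferred from the folder names:

Code
x_batch, y_batch = next(train_generator)
print(x_batch.shape)  # expected (64, 48, 48, 1): a batch of grayscale 48x48 images
print(y_batch.shape)  # expected (64, 7): one-hot labels for the 7 expressions
print(train_generator.class_indices)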

### Task 4: Create Convolution Neural Network (CNN) Model

Inspired by Goodfellow, I. J., et al. (2015). Challenges in representation learning: A report on three machine learning contests. Neural Networks, 64, 59-63. doi:10.1016/j.neunet.2014.09.005

Code
# Initialising the CNN
model = Sequential()

# 1 - Convolution
model.add(Conv2D(64,(3,3), padding='same', input_shape=(48, 48,1)))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))

# 2nd Convolution layer
model.add(Conv2D(128,(5,5), padding='same'))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))

# 3rd Convolution layer
model.add(Conv2D(512,(3,3), padding='same'))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))

# 4th Convolution layer
model.add(Conv2D(512,(3,3), padding='same'))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))

# Flattening
model.add(Flatten())

# Fully connected layer 1st layer
model.add(Dense(256))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dropout(0.25))

# Fully connected layer 2nd layer
model.add(Dense(512))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dropout(0.25))

model.add(Dense(7, activation='softmax'))

opt = Adam(learning_rate=0.0005)
model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])
model.summary()
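
plot_model was imported in Task 1 but not used; as an optional extra (it requires the pydot package and a Graphviz installation), it can render the architecture to an image:

Code
# Render the network graph to a PNG and display it inline
plot_model(model, to_file='model.png', show_shapes=True, show_layer_names=True)
Image('model.png')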

### Task 5: Train and Evaluate Model

Code
%%time

epochs = 15
steps_per_epoch = train_generator.n//train_generator.batch_size
validation_steps = validation_generator.n//validation_generator.batch_size

reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.1,
                              patience=2, min_lr=0.00001, mode='auto')
checkpoint = ModelCheckpoint("model_weights.h5", monitor='val_accuracy',
                             save_weights_only=True, save_best_only=True,
                             mode='max', verbose=1)
callbacks = [PlotLossesKerasTF(), checkpoint, reduce_lr]

history = model.fit(
    x=train_generator,
    steps_per_epoch=steps_per_epoch,
    epochs=epochs,
    validation_data = validation_generator,
    validation_steps = validation_steps,
    callbacks=callbacks
)
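
As a quick final check (my addition, reusing the validation generator defined above), the trained model can be scored once more on the held-out data:

Code
loss, acc = model.evaluate(validation_generator, steps=validation_steps)
print(f"validation loss: {loss:.4f}, accuracy: {acc:.4f}")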

### Task 6: Represent Model as JSON String

Code
model_json = model.to_json()
with open("model.json", "w") as json_file:
    json_file.write(model_json)

The sections below are for blogging purposes: they walk through the code for model.py, camera.py, main.py, and index.html that make up the Flask-based application.

First, let's write model.py, which loads the model.json generated above and uses it to predict emotions.

Code
from tensorflow.keras.models import model_from_json
from tensorflow.python.keras.backend import set_session
import numpy as np
import tensorflow as tf

# Cap this process at 15% of GPU memory so the model can run alongside other apps
config = tf.compat.v1.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.15
session = tf.compat.v1.Session(config=config)
set_session(session)

class FacialExpressionModel(object):

    EMOTIONS_LIST = ["Angry", "Disgust",
                     "Fear", "Happy",
                     "Neutral", "Sad",
                     "Surprise"]

    def __init__(self, model_json_file, model_weights_file):
        # load the model architecture from the JSON file
        with open(model_json_file, "r") as json_file:
            loaded_model_json = json_file.read()
            self.loaded_model = model_from_json(loaded_model_json)

        # load the trained weights into the new model
        self.loaded_model.load_weights(model_weights_file)
        #self.loaded_model.compile()
        #self.loaded_model._make_predict_function()

    def predict_emotion(self, img):
        # img is expected to be a (1, 48, 48, 1) grayscale face crop
        global session
        set_session(session)
        self.preds = self.loaded_model.predict(img)
        return FacialExpressionModel.EMOTIONS_LIST[np.argmax(self.preds)]
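
A hedged smoke test for this class, assuming the model.json and model_weights.h5 produced in Task 6 sit in the working directory, and using a hypothetical 48x48 grayscale crop saved as sample_face.png:

Code
import cv2
import numpy as np

fem = FacialExpressionModel("model.json", "model_weights.h5")
face = cv2.imread("sample_face.png", cv2.IMREAD_GRAYSCALE)  # hypothetical sample image
roi = cv2.resize(face, (48, 48))
# add batch and channel axes -> shape (1, 48, 48, 1), as predict_emotion expects
print(fem.predict_emotion(roi[np.newaxis, :, :, np.newaxis]))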

Next, camera.py processes the camera to capture video, or it can process a provided video placed in the videos folder.

If we replace cv2.VideoCapture('videos/facial_exp.mkv') with cv2.VideoCapture(0), it will instead capture the video stream from the connected camera.

Code
import cv2
from model import FacialExpressionModel
import numpy as np

facec = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
model = FacialExpressionModel("model.json", "model_weights.h5")
font = cv2.FONT_HERSHEY_SIMPLEX

class VideoCamera(object):
    def __init__(self):
        #self.video = cv2.VideoCapture('videos/facial_exp.mkv')
        self.video = cv2.VideoCapture(0)

    def __del__(self):
        self.video.release()

    # returns camera frames along with bounding boxes and predictions
    def get_frame(self):
        _, fr = self.video.read()
        gray_fr = cv2.cvtColor(fr, cv2.COLOR_BGR2GRAY)
        faces = facec.detectMultiScale(gray_fr, 1.3, 5)

        for (x, y, w, h) in faces:
            fc = gray_fr[y:y+h, x:x+w]

            roi = cv2.resize(fc, (48, 48))
            pred = model.predict_emotion(roi[np.newaxis, :, :, np.newaxis])

            cv2.putText(fr, pred, (x, y), font, 1, (255, 255, 0), 2)
            cv2.rectangle(fr, (x, y), (x+w, y+h), (255, 0, 0), 2)

        _, jpeg = cv2.imencode('.jpg', fr)
        return jpeg.tobytes()

Below is the basic HTML page (templates/index.html) that the Flask application renders to display the emotion-capture stream.
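
A minimal sketch of what that template could look like; it assumes the /video_feed route defined in main.py below, and only the heading text "Face expression recognition" comes from the original page:

Code
<html>
  <head>
    <title>Facial Expression Recognition</title>
  </head>
  <body>
    <h1>Face expression recognition</h1>
    <!-- the img tag streams MJPEG frames from the Flask /video_feed route -->
    <img id="bg" src="{{ url_for('video_feed') }}">
  </body>
</html>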

Last but not least is main.py, which is launched from the terminal (on macOS) or the Anaconda shell (on Windows) to start the facial expression routine.

Code
import socket

from flask import Flask, render_template, Response
from camera import VideoCamera

app = Flask(__name__)

@app.route('/')
def index():
    return render_template('index.html')

def gen(camera):
    # yield an endless multipart stream of JPEG frames (MJPEG)
    while True:
        frame = camera.get_frame()
        yield (b'--frame\r\n'
               b'Content-Type: image/jpeg\r\n\r\n' + frame + b'\r\n\r\n')

@app.route('/video_feed')
def video_feed():
    return Response(gen(VideoCamera()),
                    mimetype='multipart/x-mixed-replace; boundary=frame')

def home():
    # Connect to a server
    conn = socket.create_connection(('dummy-server.com', 8080))
    return 'Connected to server'

if __name__ == '__main__':
    #app.run(host='0.0.0.0', port=52520, debug=True)
    app.run(debug=True)
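
With model.py, camera.py, main.py, and templates/index.html in place, running python main.py starts Flask's development server (by default at http://127.0.0.1:5000/), and the browser page streams annotated frames from the /video_feed route in real time.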